Session G-3

MIMO

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

AMT: Acoustic Multi-target Tracking with Smartphone MIMO System

Chao Liu, Penghao Wang and Ruobing Jiang (Ocean University of China, China); Yanmin Zhu (Shanghai Jiao Tong University, China)

Acoustic target tracking has shown great advantages for device-free human-machine interaction over vision/RF based mechanisms. However, existing approaches for portable devices track only a single target and are incapable of handling the ubiquitous and highly challenging multi-target situations such as two-handed multimedia control and multi-player gaming. In this paper, we propose AMT, a pioneering smartphone MIMO system that achieves centimeter-level multi-target tracking. Targets' absolute distances are ranged simultaneously by performing multilateration with multiple speaker-microphone pairs. The unique challenge raised by MIMO is the superposition of multi-source signals due to the cross-correlation among speakers. We tackle this challenge by applying Zadoff-Chu (ZC) sequences with strong auto-correlation and weak cross-correlation. The most distinguishing advantage of AMT lies in the elimination of target-induced multipath effects, which previous work commonly ignores by hastily treating targets as particles. Concerning the multipath echoes reflected by each non-particle target, we define the novel concept of the primary echo to best represent target movement. AMT then improves tracking accuracy by detecting the primary echo and filtering out minor echoes. Implemented on commercial smartphones, AMT achieves average errors of 1.13 cm and 2.46 cm for single- and double-target tracking, respectively, and an average accuracy of 97% in recognizing 6 control gestures.
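
As a quick illustration of why ZC sequences suit this multi-speaker setting (a minimal NumPy sketch under assumed parameters; the sequence length and roots below are arbitrary choices, not AMT's actual transmit design):

```python
import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence of odd length, for a root coprime with the length."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def circular_xcorr_mag(a, b):
    """Magnitude of the circular cross-correlation, computed via FFTs."""
    return np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

N = 353                      # odd prime length (assumed)
s1 = zadoff_chu(7, N)        # sequence assigned to speaker 1
s2 = zadoff_chu(11, N)       # sequence assigned to speaker 2

auto = circular_xcorr_mag(s1, s1)
cross = circular_xcorr_mag(s1, s2)

# Ideal auto-correlation: a single peak of height N at zero lag, ~0 elsewhere.
# Cross-correlation between different roots stays flat at about sqrt(N).
print("auto peak / largest side lobe:", auto.max(), auto[1:].max())
print("cross peak:", cross.max(), "vs sqrt(N) =", np.sqrt(N))
```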

Camel: Context-Aware Magnetic MIMO Wireless Power Transfer with In-band Communication

Hao Zhou, Zhao Chen, Wangqiu Zhou, Haisheng Tan, Panlong Yang and Xiang-Yang Li (University of Science and Technology of China, China)

Wireless power transfer (e.g., RF- or magnetic-based) enables convenient device charging and triggers innovative applications that typically call for faster, smarter, more economical, and even simultaneous adaptive charging of multiple smart devices. Designing a wireless charging system that meets these requirements faces critical challenges, mainly including a better understanding of the energy receivers' real-time status and of the power-transfer channels, and the limited capability and smart coordination of the transmitters and receivers. In this work, we devise Camel, a context-aware MIMO MRC-WPT (magnetic resonant coupling-based wireless power transfer) system, which enables adaptive charging of multiple devices simultaneously with a novel context sensing scheme. In Camel, we craft an innovative MIMO WPT channel-state estimation scheme and collision-aware in-band parallel communication among multiple transmitters and receivers. We design and implement the Camel prototype and conduct extensive experimental studies. The results validate our design and demonstrate that Camel can support simultaneous charging of as many as 10 devices, high-speed context sensing within 50 milliseconds, and efficient parallel communication among transceivers within a proximity of ∼0.5 m.
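
For background on the underlying physics only, the sketch below solves the coupled-coil circuit equations of a toy series-resonant two-coil MRC link; all component values are illustrative assumptions, and this is not Camel's MIMO channel-state estimation or in-band communication scheme.

```python
import numpy as np

# Toy series-resonant two-coil MRC-WPT link (one TX coil, one RX coil).
f = 6.78e6                      # operating frequency [Hz] (assumed)
w = 2 * np.pi * f
L1, L2 = 24e-6, 24e-6           # coil inductances [H] (assumed)
C1, C2 = 1 / (w**2 * L1), 1 / (w**2 * L2)   # tune both coils to resonance
R1, R2 = 0.5, 0.5               # coil parasitic resistances [ohm]
Rload = 10.0                    # receiver load [ohm]
k = 0.05                        # coupling coefficient (depends on coil distance)
M = k * np.sqrt(L1 * L2)        # mutual inductance [H]
Vs = 5.0                        # source voltage, RMS phasor [V]

# Kirchhoff voltage law for the two coupled loops:  Z @ [I1, I2] = [Vs, 0]
Z = np.array([
    [R1 + 1j * w * L1 + 1 / (1j * w * C1), 1j * w * M],
    [1j * w * M, R2 + Rload + 1j * w * L2 + 1 / (1j * w * C2)],
])
I1, I2 = np.linalg.solve(Z, np.array([Vs, 0.0]))

P_in = (Vs * np.conj(I1)).real      # power drawn from the source [W]
P_load = abs(I2) ** 2 * Rload       # power delivered to the load [W]
print(f"delivered {P_load:.3f} W at efficiency {P_load / P_in:.1%}")
```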

DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks

Farzam Hejazi, Katarina Vuckovic and Nazanin Rahnavard (University of Central Florida, USA)

This paper presents a data-driven localization framework with high precision in time-varying complex multipath environments, such as dense urban areas and indoors, where GPS and model-based localization techniques fall short. We consider the angle-delay profile (ADP), a linear transformation of channel state information (CSI), in massive MIMO systems and show that ADPs preserve users' motion when stacked temporally. Given a static environment, future frames of the ADP time-series are predictable using a video-frame prediction algorithm, and a deep convolutional neural network (DCNN) can be employed to learn the background static scattering environment. To detect foreground changes in the environment, corresponding to path blockage or addition, we introduce an algorithm that takes advantage of the trained DCNN. Furthermore, we present DyLoc, a data-driven framework to recover ADPs distorted by foreground changes and to obtain precise location estimates. We evaluate the performance of DyLoc in several dynamic scenarios, employing the DeepMIMO dataset [1] to generate geo-tagged CSI datasets for indoor and outdoor environments. We show that previous DCNN-based techniques fail to perform with desirable accuracy in dynamic environments, while DyLoc continues to localize precisely. Moreover, simulations show that as the environment becomes richer in multipath components, DyLoc becomes more robust to foreground changes.
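
A minimal sketch of one common way to form an ADP from a CSI matrix, namely a DFT across the antenna axis (angle domain) and an IDFT across the subcarrier axis (delay domain); the exact linear transform used by DyLoc may differ, and the CSI below is random stand-in data.

```python
import numpy as np

def angle_delay_profile(H):
    """Map a CSI matrix H (antennas x subcarriers) to an angle-delay profile:
    a DFT across the antenna (spatial) axis resolves angles, an IDFT across the
    subcarrier (frequency) axis resolves delays; the magnitude is the ADP."""
    return np.abs(np.fft.ifft(np.fft.fft(H, axis=0), axis=1))

# Random stand-in CSI for a 64-antenna, 128-subcarrier massive MIMO link.
rng = np.random.default_rng(0)
H = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))

adp = angle_delay_profile(H)
print(adp.shape)   # one (64, 128) "frame"; stacking frames over time gives the
                   # ADP time-series fed to the video-frame-prediction DCNN
```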

Session Chair

Francesco Restuccia (Northeastern University)

Session G-4

Theory 1

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

Competing Epidemics on Graphs - Global Convergence and Coexistence

Vishwaraj Doshi, Shailaja Mallick and Do Young Eun (North Carolina State University, USA)

The dynamics of the spread of contagions such as viruses, infectious diseases, or even rumors/opinions over contact networks (graphs) have effectively been captured by the well-known Susceptible-Infected-Susceptible (SIS) epidemic model in recent years. When it comes to competition between two such contagions spreading on overlaid graphs, their propagation is captured by so-called bi-virus epidemic models. Analysis of such dynamical systems involves identifying the equilibrium points and their convergence properties, which determine whether either of the viruses dies out, or both survive together. We demonstrate how existing works fail to characterize a large subset of the model parameter space, including all parameters for which the competitiveness of the bi-virus system is significant enough to attain coexistence of the epidemics. In this paper, we fill this void and obtain convergence results for the entirety of the model parameter space, giving precise (necessary and sufficient) conditions under which the system globally converges to one of a trichotomy of possible outcomes: a virus-free state, a single-virus state, or a coexistence state - the first such result.
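
For readers unfamiliar with the model, the snippet below Euler-integrates the standard mean-field bi-virus SIS ODEs on two overlaid random graphs; the graphs, infection/recovery rates, and step size are illustrative assumptions rather than the paper's setting.

```python
import numpy as np

def bivirus_sis(A, B, beta1, delta1, beta2, delta2, steps=2000, dt=0.01, seed=0):
    """Euler integration of mean-field bi-virus SIS dynamics on overlaid graphs A, B.

    x[i], y[i] are the probabilities that node i carries virus 1 / virus 2; a node
    hosts at most one virus at a time, hence the shared (1 - x - y) factor."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.uniform(0, 0.3, n)
    y = rng.uniform(0, 0.3, n)
    for _ in range(steps):
        sus = 1.0 - x - y
        dx = -delta1 * x + beta1 * sus * (A @ x)
        dy = -delta2 * y + beta2 * sus * (B @ y)
        x = np.clip(x + dt * dx, 0.0, 1.0)
        y = np.clip(y + dt * dy, 0.0, 1.0)
    return x, y

# Two overlaid Erdos-Renyi graphs on the same node set (illustrative).
rng = np.random.default_rng(1)
n = 200
A = np.triu((rng.random((n, n)) < 0.05).astype(float), 1); A = A + A.T
B = np.triu((rng.random((n, n)) < 0.05).astype(float), 1); B = B + B.T

x, y = bivirus_sis(A, B, beta1=0.15, delta1=1.0, beta2=0.15, delta2=1.0)
print("avg infection level, virus 1:", round(x.mean(), 3), " virus 2:", round(y.mean(), 3))
```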

A Worst-Case Approximate Analysis of Peak Age-of-Information Via Robust Queueing Approach

Zhongdong Liu (Virginia Tech, USA); Yu Sang (Temple University, USA); Bin Li (University of Rhode Island, USA); Bo Ji (Virginia Tech, USA)

A new timeliness metric, called Age-of-Information (AoI), has recently attracted a lot of research interest for real-time applications with information updates. It has been extensively studied for various queueing models based on probabilistic approaches, where the analyses heavily depend on the properties of specific distributions (e.g., the memoryless property of the exponential distribution or the i.i.d. assumption). In this work, we take an alternative new approach, the robust queueing approach, to analyze the Peak Age-of-Information (PAoI). Specifically, we first model the uncertainty in the stochastic arrival and service processes using uncertainty sets. This enables us to approximate the expected PAoI performance for very general arrival and service processes, including those exhibiting heavy-tailed behaviors or correlations, where traditional probabilistic approaches cannot be applied. We then derive a new bound on the PAoI in the single-source single-server setting. Furthermore, we generalize our analysis to two-source single-server systems with symmetric arrivals, which involves new challenges (e.g., the service times of the updates from the two sources are coupled in one single uncertainty set). Finally, through numerical experiments, we show that our new bounds provide a good approximation of the expected PAoI. Compared to some well-known bounds in the literature (e.g., one based on Kingman's bound under the i.i.d. assumption) that tend to be inaccurate under light load, our new approximation is accurate under both light and heavy loads, both of which are critical scenarios for AoI performance.
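
As a reference point for the metric itself (this is not the robust-queueing bound), a short simulation of a single-source FCFS queue that computes the empirical peak AoI; the arrival and service distributions below are arbitrary choices.

```python
import numpy as np

def peak_aoi_fcfs(interarrivals, services):
    """Empirical peak AoI in a single-source FCFS single-server queue.

    Update k is generated at t[k] and delivered at d[k]; the age peaks just before
    delivery of update k, at d[k] - t[k-1] (generation time of the previous update)."""
    t = np.cumsum(interarrivals)               # generation (arrival) times
    d = np.empty_like(t)
    d[0] = t[0] + services[0]
    for k in range(1, len(t)):
        d[k] = max(t[k], d[k - 1]) + services[k]   # FCFS departure recursion
    return d[1:] - t[:-1]

rng = np.random.default_rng(0)
n = 200_000
lam, mu = 0.5, 1.0                              # arrival and service rates (assumed)
X = rng.exponential(1 / lam, n)                 # i.i.d. exponential interarrivals
S = rng.pareto(3.0, n) * (3.0 - 1.0) / mu       # Lomax(3) service times with mean 1/mu

print("empirical mean PAoI:", round(peak_aoi_fcfs(X, S).mean(), 3))
```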

Comparison of Decentralized and Centralized Update Paradigms for Remote Tracking of Distributed Dynamic Sources

Sunjung Kang and Atilla Eryilmaz (The Ohio State University, USA); Changhee Joo (Korea University, Korea (South))

In this work, we perform a comparative study of centralized and decentralized update strategies for the basic remote tracking problem of many distributed users/devices with randomly evolving states. Our goal is to reveal the impact of the fundamentally different tradeoffs between information accuracy and communication cost under these two update paradigms. At one extreme, decentralized updates are triggered by the distributed users/transmitters based on exact local state information, but at a higher cost due to the need for uncoordinated multi-user communication. At the other extreme, centralized updates are triggered by the common tracker/receiver based on estimated global state information, but at a lower cost thanks to coordinated multi-user communication. We use a generic superlinear function to model the communication cost with respect to the number of simultaneous updates for multiple sources. We characterize the conditions under which transmitter-driven decentralized update policies outperform their receiver-driven centralized counterparts for symmetric sources, and vice versa. Further, we extend the results to a scenario where the system parameters are unknown and develop learning-based update policies that asymptotically achieve the minimum cost levels attained by the optimal policies.
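
A toy simulation contrasting a threshold-triggered decentralized policy with periodic centralized polling for random-walk sources under a superlinear per-slot communication cost; the policies, cost function, and all parameters are illustrative assumptions, and the coordinated-vs-uncoordinated cost asymmetry studied in the paper is not modeled.

```python
import numpy as np

def compare_updates(n=20, T=50_000, alpha=1.5, thresh=2, poll_every=4, seed=0):
    """Track n random-walk sources remotely; per-slot cost = sum of squared tracking
    errors + (number of simultaneous updates) ** alpha (a superlinear comm cost)."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1, 1], size=(T, n))

    def run(centralized):
        state = np.zeros(n)          # true source states
        est = np.zeros(n)            # tracker's estimates (last received values)
        cost = 0.0
        for t in range(T):
            state = state + steps[t]
            if centralized:
                send = np.full(n, t % poll_every == 0)    # tracker polls everyone
            else:
                send = np.abs(state - est) >= thresh      # sources decide locally
            m = int(send.sum())
            est = np.where(send, state, est)
            cost += ((state - est) ** 2).sum() + m ** alpha
        return cost / T

    print("decentralized (threshold) avg cost:", round(run(False), 2))
    print("centralized   (polling)   avg cost:", round(run(True), 2))

compare_updates()
```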

Looking for the Maximum Independent Set: A New Perspective on the Stable Path Problem

Yichao Cheng and Ning Luo (Yale University, USA); Jingxuan Zhang (Tongji University, China); Timos Antonopoulos, Ruzica Piskac and Qiao Xiang (Yale University, USA)

The stable path problem (SPP) is a unified model for analyzing the convergence of distributed routing protocols (e.g., BGP), and a foundation for many network verification tools. Although substantial progress has been made on finding solutions (i.e., stable path assignments) for particular subclasses of SPP instances and on analyzing the relation between properties of SPP instances and the convergence of the corresponding routing policies, the non-trivial challenge of finding stable path assignments for generic SPP instances still remains. Tackling this challenge is important because it can enable multiple important, novel routing use cases. To fill this gap, in this paper we introduce a novel data structure called the solvability digraph, which encodes key properties about stable path assignments in a compact graph representation. SPP is thus equivalently transformed into the problem of finding, in the solvability digraph, a maximum independent set (MIS) whose size equals the number of autonomous systems (ASes) in the given SPP instance. We leverage this key finding to develop a heuristic polynomial-time algorithm GREEDYMIS that solves strictly more SPP instances than state-of-the-art heuristics. We apply GREEDYMIS to designing two important, novel use cases: (1) a centralized interdomain routing system that uses GREEDYMIS to compute paths for ASes and (2) a secure multi-party computation (SMPC) protocol that allows ASes to use GREEDYMIS collaboratively to compute paths without exposing their routing preferences. We demonstrate the benefits and efficiency of these use cases via evaluation using real-world datasets.
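
For intuition about the MIS subproblem, below is a generic minimum-degree greedy heuristic on a plain undirected graph; note that the paper's GREEDYMIS operates on the solvability digraph and targets an independent set of size equal to the number of ASes, which this generic sketch does not capture.

```python
def greedy_mis(adj):
    """Greedy independent-set heuristic: repeatedly pick a minimum-degree vertex of
    the remaining graph, add it to the set, and delete it together with its neighbors.

    adj: dict mapping each vertex to the set of its neighbors (undirected graph)."""
    remaining = {v: set(ns) for v, ns in adj.items()}
    mis = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))   # minimum-degree vertex
        mis.append(v)
        removed = remaining[v] | {v}
        for u in removed:
            remaining.pop(u, None)
        for u in remaining:
            remaining[u] -= removed
    return mis

# Tiny example: a 6-cycle, whose maximum independent set has size 3.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(greedy_mis(cycle6))   # e.g. [0, 2, 4]
```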

Session Chair

Ori Rottenstreich (Technion, Israel)

Session G-5

Theory 2

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

Sequential Resource Access: Theory and Algorithm

Lin Chen (Sun Yat-sen University, China); Anastasios Giovanidis (Sorbonne Université & CNRS-LIP6, France); Wei Wang (Zhejiang University, China); Shan Lin (Stony Brook University, USA)

We formulate and analyze a generic sequential resource access problem arising in a variety of engineering fields, where a user has at her disposal a number of heterogeneous computing, communication, or storage resources, each characterized by the probability of successfully executing the user's task and the related access delay and cost, and seeks an optimal access strategy maximizing her utility within a given time horizon, defined as the expected reward minus the access cost. We develop an algorithmic framework for the (near-)optimal sequential resource access strategy. We first prove that the problem of finding an optimal strategy is NP-hard in general. Given this hardness result, we present a greedy strategy implementable in linear time and establish a closed-form sufficient condition for its optimality. We then develop a series of polynomial-time approximation algorithms achieving (?, ?)-optimality. The key components in our design include a pruning process that eliminates dominated strategies, thus maintaining polynomial time and space overhead, and a comprehensive scheme allowing a flexible trade-off of time and space overhead against the performance guarantee.
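
The abstract does not spell out the greedy rule, so the sketch below uses an assumed ordering (success probability per unit access cost) and compares its expected utility, ignoring access delays and the time horizon, against brute-force enumeration on a tiny instance; all numbers are illustrative.

```python
from itertools import permutations

def expected_utility(order, p, c, R):
    """Expected reward minus cost when probing resources in the given order and
    stopping at the first success (access delays and the horizon are ignored)."""
    util, reach = 0.0, 1.0        # reach = probability that all earlier accesses failed
    for k in order:
        util += reach * (p[k] * R - c[k])
        reach *= 1.0 - p[k]
    return util

p = [0.9, 0.5, 0.2]               # success probabilities (assumed)
c = [3.0, 0.5, 0.1]               # access costs (assumed)
R = 10.0                          # reward for completing the task (assumed)

greedy = sorted(range(len(p)), key=lambda k: p[k] / c[k], reverse=True)
best = max(permutations(range(len(p))), key=lambda o: expected_utility(o, p, c, R))
print("greedy order:", greedy, "utility:", round(expected_utility(greedy, p, c, R), 3))
print("best order:  ", list(best), "utility:", round(expected_utility(best, p, c, R), 3))
```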

Optimal Online Balanced Graph Partitioning

Maciej Pacut, Mahmoud Parham and Stefan Schmid (University of Vienna, Austria)

Distributed applications generate a significant amount of network traffic. By collocating frequently communicating nodes (e.g., virtual machines) in the same cluster (e.g., a server or rack), we can reduce the network load and improve application performance. However, the communication pattern of different applications is often unknown a priori and may change over time; hence it needs to be learned in an online manner. This paper revisits the online balanced partitioning problem, which asks for an algorithm that strikes an optimal tradeoff between the benefits of collocation (i.e., lower network load) and its costs (i.e., migrations). Our first contribution is a significantly improved deterministic lower bound of Ω(k · l) on the competitive ratio, where l is the number of clusters and k is the cluster size, even for a scenario in which the communication pattern is static and can be perfectly partitioned; we also provide an asymptotically tight upper bound of O(k · l) for this scenario. For k = 3, we contribute an asymptotically tight upper bound of Θ(l) for the general model in which the communication pattern can change arbitrarily over time. We improve the result for k = 2 by providing a strictly 6-competitive upper bound for the general model.

Combining Regularization with Look-Ahead for Competitive Online Convex Optimization

Ming Shi and Xiaojun Lin (Purdue University, USA); Lei Jiao (University of Oregon, USA)

There has been significant interest in leveraging limited look-ahead to achieve low competitive ratios for online convex optimization (OCO). However, existing online algorithms (such as Averaging Fixed Horizon Control (AFHC)) that can leverage look-ahead to reduce the competitive ratio still produce competitive ratios that grow unbounded as the coefficient ratio (i.e., the maximum ratio of the switching-cost coefficient to the service-cost coefficient) increases. On the other hand, the regularization method can attain a competitive ratio that remains bounded when the coefficient ratio is large, but it does not benefit from look-ahead. In this paper, we propose a new algorithm, called Regularization with Look-Ahead (RLA), that gets the best of both AFHC and the regularization method, i.e., its competitive ratio decreases with the look-ahead window size when the coefficient ratio is small and remains bounded when the coefficient ratio is large. We also provide a matching lower bound on the competitive ratios of all online algorithms with look-ahead, which differs from the achievable competitive ratio of RLA by a factor that depends only on the problem size. The competitive analysis of RLA involves a non-trivial generalization of online primal-dual analysis to the case with look-ahead.
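
To make the look-ahead baseline concrete, below is a minimal one-dimensional sketch of AFHC with a squared switching cost (a simplifying assumption); this is not RLA itself, and the request sequence, window size, and coefficient are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def frame_cost(x, y, prev, beta):
    """Service cost (x_t - y_t)^2 plus squared switching cost within one frame."""
    switch = np.diff(np.concatenate(([prev], x)))
    return np.sum((x - y) ** 2) + beta * np.sum(switch ** 2)

def afhc(y, w, beta):
    """Averaging Fixed Horizon Control with look-ahead window w on a 1-D trajectory:
    run w+1 copies of Fixed Horizon Control with staggered frames and average them."""
    T = len(y)
    decisions = np.zeros((w + 1, T))
    for r in range(w + 1):                       # one FHC copy per frame offset
        prev, start = 0.0, 0
        while start < T:
            end = r if (start == 0 and r > 0) else min(start + w + 1, T)
            seg = y[start:end]
            res = minimize(frame_cost, x0=np.full(len(seg), prev), args=(seg, prev, beta))
            decisions[r, start:end] = res.x
            prev = res.x[-1]
            start = end
    return decisions.mean(axis=0)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=60))               # a drifting request sequence
beta = 5.0
x = afhc(y, w=3, beta=beta)
total = np.sum((x - y) ** 2) + beta * np.sum(np.diff(np.concatenate(([0.0], x))) ** 2)
print("AFHC total cost:", round(float(total), 2))
```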

ITE: A Structural Entropy Based Approach for Source Detection

Chong Zhang, Qiang Guo, Luoyi Fu and Xiaoying Gan (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiaotong University, China)

This paper studies the problem of source detection, which is to infer the source node from the aftermath of a cascade, i.e., the observed infected graph G_N of the network at some time. Prior art has adopted various statistical quantities such as degree, distance, or infection size to reflect the structural centrality of the source. In this paper, we propose a new metric, which we call the infected tree entropy (ITE), to utilize richer underlying structural features for source detection. Our idea of ITE is inspired by the concept of structural entropy [1], which demonstrated that minimizing the average number of bits needed to encode the network structure under different partitions is the principle for detecting the natural or true structures in real-world networks. Accordingly, our proposed ITE-based estimator for the source tries to minimize the coding of the network partitions brought by the infected trees rooted at the potential sources, thus minimizing the structural deviation between the cascades from the potential sources and the actual infection process included in G_N. On polynomially growing geometric trees, with increasing tree heterogeneity, the ITE estimator remarkably yields more reliable detection under only moderate infection sizes. In contrast, for regular expanding trees, we still observe a guaranteed detection probability of the ITE estimator even with an infinite infection size, thanks to the degree regularity property. We also algorithmically realize ITE-based detection, which enjoys linear time complexity via a message-passing scheme, and further extend it to general graphs. Experiments on various network topologies confirm the superiority of ITE over the baselines. For example, ITE returns an accuracy of 75% in ranking the source among the top 5%, far exceeding the 45% of classic algorithms on scale-free networks.
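
For context on the entropy notion being built upon, the snippet below computes the one-dimensional structural entropy of a graph (the entropy of its degree distribution normalized by total edge weight); this is only the underlying concept from [1], not the ITE estimator itself.

```python
import numpy as np

def one_dim_structural_entropy(adj):
    """One-dimensional structural entropy of an undirected graph:
    H1(G) = - sum_i (d_i / 2m) * log2(d_i / 2m)."""
    deg = adj.sum(axis=1)
    p = deg / deg.sum()          # d_i / 2m, the random-walk stationary distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A star concentrates structure on its hub and has lower structural entropy
# than a perfectly regular cycle on the same number of nodes.
n = 8
star = np.zeros((n, n)); star[0, 1:] = star[1:, 0] = 1.0
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1.0

print("star :", round(one_dim_structural_entropy(star), 3))    # about 2.40 bits
print("cycle:", round(one_dim_structural_entropy(cycle), 3))   # 3.0 bits
```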

Session Chair

Weili (lily) Wu (U. Texas Dallas)

Session G-6

Classification

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

Learning the unknown: Improving modulation classification performance in unseen scenarios

Erma Perenda and Sreeraj Rajendran (KU Leuven, Belgium); Gérôme Bovet (Armasuisse, Switzerland); Sofie Pollin (KU Leuven, Belgium); Mariya Zheleva (UAlbany SUNY, USA)

Automatic Modulation Classification (AMC) is significant for the practical support of a plethora of emerging spectrum applications, such as Dynamic Spectrum Access (DSA) in 5G and beyond, resource allocation, jammer identification, intruder detection, and, in general, automated interference analysis. Although AMC is a well-known problem, most existing work has been done under the assumption that the classifier has prior knowledge about the signal and channel parameters. This paper shows that unknown signal and channel parameters significantly degrade the performance of two of the most popular research streams in modulation classification: expert feature-based and data-driven. By understanding why and where those methods fail in such unknown scenarios, we propose two possible directions for making AMC more robust to the signal-shape transformations introduced by unknown signal and channel parameters. We show that Spatial Transformer Networks (STN) and Transfer Learning (TL) embedded into a light ResNeXt-based classifier can improve average classification accuracy by up to 10-30% in specific unseen scenarios with only 5% labeled data, for a large dataset of 20 complex higher-order modulations.
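
A minimal PyTorch sketch of the transfer-learning ingredient only: freeze the convolutional feature extractor of a (hypothetically pretrained) 1-D CNN over raw IQ samples and fine-tune just the classification head on a small labeled set; the architecture, checkpoint name, and sizes are illustrative stand-ins, not the paper's ResNeXt-based classifier or its STN front-end.

```python
import torch
import torch.nn as nn

class IQClassifier(nn.Module):
    """Small 1-D CNN over raw IQ samples (2 channels: I and Q); sizes are illustrative."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, 2, num_samples)
        return self.head(self.features(x).flatten(1))

model = IQClassifier()
# model.load_state_dict(torch.load("pretrained_seen_scenario.pt"))  # hypothetical checkpoint

for p in model.features.parameters():          # transfer learning: freeze the backbone
    p.requires_grad = False

opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fine-tune only the head on the small (e.g., 5%) labeled set from the unseen scenario.
iq = torch.randn(64, 2, 1024)                  # dummy labeled batch of IQ windows
labels = torch.randint(0, 20, (64,))
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(iq), labels)
    loss.backward()
    opt.step()
print("fine-tuned head, final batch loss:", float(loss))
```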

Can You Fix My Neural Network? Real-Time Adaptive Waveform Synthesis for Resilient Wireless Signal Classification

Salvatore D'Oro, Francesco Restuccia and Tommaso Melodia (Northeastern University, USA)

Due to the sheer scale of the Internet of Things (IoT) and 5G, the wireless spectrum is becoming severely congested. For this reason, wireless devices will need to continuously adapt to current spectrum conditions by changing their communication parameters in real-time. Therefore, wireless signal classification (WSC) will become a compelling necessity to decode fast-changing signals from dynamic transmitters. Thanks to its capability of classifying complex phenomena without explicit mathematical modeling, deep learning (DL) has been demonstrated to be a key enabler of WSC. Although DL can achieve a very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing the classification performance in real-world live settings. Since retraining classifiers is cumbersome after deployment, existing work has leveraged carefully tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter's side, can restore the features that are lost because of the channel's action, an approach known as waveform synthesis. However, these approaches compute the FIRs using offline optimization strategies, which limits their efficacy in highly dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real-time through DRL. Chares is a DRL agent whose architecture is based upon Twin Delayed Deep Deterministic Policy Gradients (TD3), which requires minimal feedback from the receiver and explores a continuous action space for best performance. Chares has been extensively evaluated on two well-known datasets with an extensive number of channels. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases accuracy by up to 4.1x with respect to the case where no waveform synthesis is performed and by 1.9x with respect to existing work, and can compute new actions within 41 µs.
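
A minimal NumPy sketch of the waveform-synthesis operation itself, i.e., applying FIR taps to the complex baseband IQ stream at the transmitter; in Chares the taps would be produced in real time by the TD3 agent from receiver feedback, which is not modeled here, and the tap values below are arbitrary.

```python
import numpy as np

def synthesize(iq, taps):
    """Filter a complex baseband IQ stream with FIR taps, keeping the original length."""
    return np.convolve(iq, taps, mode="full")[: len(iq)]

rng = np.random.default_rng(0)

# A QPSK burst as the transmitted waveform (illustrative).
bits_i = rng.integers(0, 2, 4096) * 2 - 1
bits_q = rng.integers(0, 2, 4096) * 2 - 1
symbols = (bits_i + 1j * bits_q) / np.sqrt(2)

taps_identity = np.array([1.0 + 0j])                              # no synthesis
taps_agent = np.array([1.0 + 0j, 0.12 - 0.05j, -0.03 + 0.04j])    # illustrative taps

tx_plain = synthesize(symbols, taps_identity)
tx_shaped = synthesize(symbols, taps_agent)
print(tx_plain.shape, tx_shaped.shape)   # same burst length, reshaped baseband features
```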

Adaptive Clustering-based Malicious Traffic Classification at the Network Edge

Alec F Diallo (The University of Edinburgh, United Kingdom (Great Britain)); Paul Patras (University of Edinburgh, United Kingdom (Great Britain))

The rapid uptake of digital services and Internet of Things (IoT) technology gives rise to unprecedented numbers and diversification of cyber attacks, with which commonly used rule-based Network Intrusion Detection Systems (NIDSs) are struggling to cope. Therefore, Artificial Intelligence (AI) is being exploited as a second line of defense, since this methodology helps in extracting non-obvious patterns from network traffic and subsequently in detecting new types of threats more confidently. Cybersecurity is, however, an arms race, and intelligent solutions face renewed challenges as attacks evolve while network traffic volumes surge. In this paper, we propose Adaptive Clustering-based Intrusion Detection (ACID), a novel approach to malicious traffic classification and a valid candidate for deployment at the network edge. ACID addresses the critical challenge of sensitivity to subtle changes in traffic features, which routinely leads to misclassification. We circumvent this problem by relying on low-dimensional embeddings learned with a lightweight neural model that we introduce, comprising multiple kernel networks, which optimally separates samples of different classes. We empirically evaluate our approach with both synthetic data and three intrusion detection datasets spanning 20 years, and demonstrate that ACID consistently attains 100% accuracy and F1-score and a 0% false alarm rate, thereby significantly outperforming state-of-the-art clustering methods and NIDSs.
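
To illustrate only the downstream classification idea (ACID's multi-kernel embedding network is not reproduced here, and the embeddings below are random stand-ins), a nearest-centroid classifier in a low-dimensional embedding space:

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """One centroid per class in the (low-dimensional) embedding space."""
    classes = np.unique(labels)
    return classes, np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

def classify(embeddings, classes, centroids):
    """Assign each embedded flow to the class of its nearest centroid."""
    d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Stand-in embeddings; ACID would obtain well-separated ones from its neural model.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=0.5, size=(500, 8))
attack = rng.normal(loc=2.0, scale=0.5, size=(500, 8))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)

classes, centroids = fit_centroids(X, y)
pred = classify(X, classes, centroids)
print("accuracy on the stand-in embeddings:", (pred == y).mean())
```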

Robust Online Learning against Malicious Manipulation with Application to Network Flow Classification

Yupeng Li and Ben Liang (University of Toronto, Canada); Ali Tizghadam (TELUS Communications, Canada)

Malicious data manipulation reduces the effectiveness of machine learning techniques, which rely on accurate knowledge of the input data. Motivated by real-world applications in network flow classification, we address the problem of robust online learning with delayed feedback in the presence of malicious data generators that attempt to gain a favorable classification outcome by manipulating the data features. We propose online algorithms, termed ROLC-NC and ROLC-C, for the cases where the malicious data generators are non-clairvoyant and clairvoyant, respectively. We derive regret bounds for both algorithms and show that they are sub-linear under mild conditions. We further evaluate the proposed algorithms in network flow classification via extensive experiments using real-world data traces. Our experimental results demonstrate that both algorithms can approach the performance of an optimal static offline classifier that is not under attack, while outperforming the same offline classifier when tested with a mixture of normal and manipulated data.
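
As a generic illustration of the delayed-feedback setting only (this is not ROLC-NC or ROLC-C), an online-gradient-descent logistic classifier whose true labels arrive several rounds late, run on synthetic data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_logistic_delayed(X, y, delay=20, lr=0.1):
    """Online gradient descent for logistic regression when the label of the flow
    classified at round t only becomes available at round t + delay."""
    n, d = X.shape
    w = np.zeros(d)
    mistakes = 0
    for t in range(n):
        pred = sigmoid(X[t] @ w) >= 0.5
        mistakes += int(pred != (y[t] == 1))
        s = t - delay                            # feedback for round s arrives now
        if s >= 0:
            w -= lr * (sigmoid(X[s] @ w) - y[s]) * X[s]
    return w, mistakes / n

rng = np.random.default_rng(0)
n, d = 5000, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)

w, err = online_logistic_delayed(X, y)
print("online error rate under delayed feedback:", round(err, 3))
```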

Session Chair

Qiben Yan (Michigan State University)
